Results 1 - 20 of 38
1.
BMC Oral Health ; 24(1): 490, 2024 Apr 24.
Article in English | MEDLINE | ID: mdl-38658959

ABSTRACT

BACKGROUND: A deep learning model trained on a large image dataset can be used to detect and discriminate targets with similar but not identical appearances. The aim of this study was to evaluate the post-training performance of the CNN-based YOLOv5x algorithm in detecting white spot lesions in post-orthodontic oral photographs using the limited data available, and to serve as a preliminary study for fully automated models that could be clinically integrated in the future. METHODS: A total of 435 images in JPG format were uploaded into the CranioCatch labeling software and labeled for white spot lesions. The labeled images were resized to 640 × 320 while maintaining their aspect ratio before model training, then randomly divided into three groups (training: 349 images, 1589 labels; validation: 43 images, 181 labels; test: 43 images, 215 labels). The YOLOv5x algorithm was used to perform deep learning. The segmentation performance of the tested model was visualized and analyzed using ROC analysis and a confusion matrix, and true positive (TP), false positive (FP), and false negative (FN) values were determined. RESULTS: Among the test group images, there were 133 TPs, 36 FPs, and 82 FNs. The model's precision, recall, and F1 score for detecting white spot lesions were 0.786, 0.618, and 0.692, respectively. The AUC obtained from the ROC analysis was 0.712, and the mAP obtained from the precision-recall curve was 0.425. CONCLUSIONS: The model's accuracy and sensitivity in detecting white spot lesions remained lower than expected for practical application, but represent a promising and acceptable detection rate compared with previous studies. The current study provides preliminary insight that can be further improved by enlarging the training dataset and modifying the deep learning algorithm.
CLINICAL RELEVANCE: Deep learning systems can help clinicians distinguish white spot lesions that may be missed during visual inspection.
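The precision, recall, and F1 values reported in these abstracts follow directly from the TP/FP/FN counts; a minimal sketch of that computation (the function name is illustrative), using the counts from this study's test set:

```python
def detection_metrics(tp, fp, fn):
    """Compute precision, recall, and F1 from confusion counts."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    return precision, recall, f1

# Counts reported for the white-spot-lesion test set: 133 TP, 36 FP, 82 FN.
precision, recall, f1 = detection_metrics(133, 36, 82)
# ≈ 0.787, 0.619, 0.693 — the abstract reports the truncated values 0.786, 0.618, 0.692.
print(round(precision, 3), round(recall, 3), round(f1, 3))
```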


Subjects
Algorithms; Deep Learning; Humans; Pilot Projects; Photography, Dental/methods; Image Processing, Computer-Assisted/methods; White People
2.
Article in English | MEDLINE | ID: mdl-38632035

ABSTRACT

OBJECTIVE: The aim of this study was to assess the efficacy of a deep learning methodology for the automated identification and enumeration of permanent teeth in bitewing radiographs. STUDY DESIGN: A total of 1248 bitewing radiographs were annotated using the CranioCatch labeling program, developed in Eskisehir, Turkey. The dataset was partitioned into 3 subsets: training (n = 1000, 80%), validation (n = 124, 10%), and test (n = 124, 10%) sets. The images were subjected to a 3 × 3 clash operation to enhance the clarity of the labeled regions. RESULTS: The F1, sensitivity, and precision results of the artificial intelligence model obtained using the YOLOv5 architecture on the test dataset were 0.9913, 0.9954, and 0.9873, respectively. CONCLUSION: Numerical identification of teeth by deep learning-based artificial intelligence algorithms applied to bitewing radiographs has demonstrated notable efficacy. Clinical decision support software augmented by artificial intelligence has the potential to enhance the efficiency and effectiveness of dental practitioners.
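The 80/10/10 partition described above is a standard random split; a hypothetical sketch (the file names, seed, and split routine are illustrative, not the study's code):

```python
import random

def split_dataset(items, train_frac=0.8, val_frac=0.1, seed=42):
    """Shuffle a list and partition it into train/validation/test subsets."""
    rng = random.Random(seed)
    shuffled = items[:]
    rng.shuffle(shuffled)
    n_train = int(len(shuffled) * train_frac)
    n_val = int(len(shuffled) * val_frac)
    train = shuffled[:n_train]
    val = shuffled[n_train:n_train + n_val]
    test = shuffled[n_train + n_val:]
    return train, val, test

# 1248 bitewing images: truncation gives 998/124/126 here;
# the study rounds to 1000/124/124.
images = [f"bitewing_{i:04d}.jpg" for i in range(1248)]
train, val, test = split_dataset(images)
print(len(train), len(val), len(test))
```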

3.
J Stomatol Oral Maxillofac Surg ; : 101817, 2024 Mar 07.
Article in English | MEDLINE | ID: mdl-38458545

ABSTRACT

OBJECTIVE: The aim of this study was to determine whether a deep learning (DL) model can predict the surgical difficulty of an impacted maxillary third molar using panoramic images before surgery. MATERIALS AND METHODS: The dataset consists of 708 panoramic radiographs of patients who presented to the Oral and Maxillofacial Surgery Clinic for various reasons. The difficulty of each maxillary third molar was scored based on depth (V), angulation (H), relation with the maxillary sinus (S), and relation with the ramus (R) on panoramic images. The YOLOv5x architecture was used to perform automatic segmentation and classification. To prevent images used in training from being re-tested, the dataset was subdivided as follows: 80% training, 10% validation, and 10% test group. RESULTS: The impacted upper third molar segmentation model showed the best performance, with sensitivity, precision, and F1 score of 0.9705, 0.9428, and 0.9565, respectively. The S-model had lower sensitivity, precision, and F1 score than the other models, with 0.8974, 0.6194, and 0.7329, respectively. CONCLUSION: The results showed that the proposed DL model could be effective for predicting the surgical difficulty of an impacted maxillary third molar using panoramic radiographs, and this approach might serve as a decision support mechanism for clinicians in the perisurgical period.

4.
Article in English | MEDLINE | ID: mdl-38502963

ABSTRACT

OBJECTIVES: This study aims to develop an artificial intelligence (AI) model based on nnU-Net v2 for automatic maxillary sinus (MS) segmentation in cone beam computed tomography (CBCT) volumes and to evaluate the model's performance. METHODS: In 101 CBCT scans, the MS was annotated using the CranioCatch labelling software (Eskisehir, Turkey). The dataset was divided into three parts: 80 CBCT scans for training, 11 for validation, and 10 for testing. Model training was conducted using the nnU-Net v2 deep learning model with a learning rate of 0.00001 for 1000 epochs. The model's ability to automatically segment the MS on CBCT scans was assessed by several parameters, including F1-score, accuracy, sensitivity, precision, area under the curve (AUC), Dice coefficient (DC), 95% Hausdorff distance (95% HD), and intersection over union (IoU). RESULTS: The F1-score, accuracy, sensitivity, and precision values were 0.96, 0.99, 0.96, and 0.96, respectively, for successful segmentation of the maxillary sinus in CBCT images. The AUC, DC, 95% HD, and IoU values were 0.97, 0.96, 1.19, and 0.93, respectively. CONCLUSIONS: Models based on nnU-Net v2 can segment the MS autonomously and accurately in CBCT images.
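The Dice coefficient and IoU reported above are overlap ratios between the predicted and ground-truth masks; a minimal sketch treating masks as sets of pixel coordinates (the toy masks are illustrative):

```python
def dice_and_iou(pred, truth):
    """Overlap metrics for two binary masks given as sets of pixel coordinates."""
    inter = len(pred & truth)
    union = len(pred | truth)
    dice = 2 * inter / (len(pred) + len(truth))
    iou = inter / union
    return dice, iou

# Toy 1-D example: the prediction overlaps the ground truth on 8 of 10 pixels.
truth = set(range(10))
pred = set(range(2, 12))
dice, iou = dice_and_iou(pred, truth)
print(round(dice, 3), round(iou, 3))  # 0.8 0.667
```

Dice weights the intersection twice, so it is always at least as large as IoU for the same pair of masks, which is why the study's DC (0.96) exceeds its IoU (0.93).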

5.
BMC Oral Health ; 24(1): 155, 2024 Jan 31.
Article in English | MEDLINE | ID: mdl-38297288

ABSTRACT

BACKGROUND: This retrospective study aimed to develop a deep learning algorithm for the interpretation of panoramic radiographs and to examine its performance in detecting periodontal bone loss and bone loss patterns. METHODS: A total of 1121 panoramic radiographs were used. Bone losses in the maxilla and mandible (total alveolar bone loss) (n = 2251), interdental bone losses (n = 25303), and furcation defects (n = 2815) were labeled using the segmentation method. In addition, interdental bone losses were divided into horizontal (n = 21839) and vertical (n = 3464) bone losses according to the defect patterns. A convolutional neural network (CNN)-based artificial intelligence (AI) system was developed using the U-Net architecture. The performance of the deep learning algorithm was statistically evaluated by confusion matrix and ROC curve analysis. RESULTS: The system showed the highest diagnostic performance in the detection of total alveolar bone loss (AUC = 0.951) and the lowest in the detection of vertical bone loss (AUC = 0.733). The sensitivity, precision, F1 score, accuracy, and AUC values were 1, 0.995, 0.997, 0.994, and 0.951 for total alveolar bone loss; 0.947, 0.939, 0.943, 0.892, and 0.910 for horizontal bone losses; 0.558, 0.846, 0.673, 0.506, and 0.733 for vertical bone losses; and 0.892, 0.933, 0.912, 0.837, and 0.868 for furcation defects, respectively. CONCLUSIONS: AI systems offer promising results in determining periodontal bone loss patterns and furcation defects from dental radiographs. This suggests that CNN algorithms can also be used to provide more detailed information, such as automatic determination of periodontal disease severity and treatment planning, in various dental radiographs.


Subjects
Alveolar Bone Loss; Deep Learning; Furcation Defects; Humans; Alveolar Bone Loss/diagnostic imaging; Radiography, Panoramic/methods; Retrospective Studies; Furcation Defects/diagnostic imaging; Artificial Intelligence; Algorithms
6.
Odontology ; 2023 Oct 31.
Article in English | MEDLINE | ID: mdl-37907818

ABSTRACT

The objective of this study was to use a CNN-based deep-learning model to detect second mesiobuccal (MB2) canals, which are seen as a variation in maxillary molar root canals. In the current study, 922 axial sections from the cone beam computed tomography (CBCT) images of 153 patients were used. The segmentation method was employed to identify the MB2 canals in maxillary molars that had not previously undergone endodontic treatment. Labeled images were divided into training (80%), validation (10%), and testing (10%) groups. The artificial intelligence (AI) model was trained using the You Only Look Once v5 (YOLOv5x) architecture with 500 epochs and a learning rate of 0.01. Confusion matrix and receiver operating characteristic (ROC) analysis were used in the statistical evaluation of the results. The sensitivity of the MB2 canal segmentation model was 0.92, the precision was 0.83, and the F1 score was 0.87. The area under the curve (AUC) in the ROC graph of the model was 0.84. The mAP value at 0.5 intersection over union (IoU) was 0.88. The deep-learning algorithm showed high success in the detection of the MB2 canal. The success of endodontic treatment can be increased, and clinicians' time saved, by using newly created artificial intelligence-based models to identify variations in root canal anatomy before treatment.
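The mAP figure cited here and in several other abstracts is the mean over classes of average precision (AP), the area under a precision-recall curve built from confidence-ranked detections. A simplified single-class sketch of the non-interpolated variant (the detection list is illustrative, not this study's data):

```python
def average_precision(detections, n_truth):
    """Non-interpolated AP: area under the precision-recall curve.

    detections: (confidence, is_true_positive) pairs for one class;
    n_truth: number of ground-truth objects of that class.
    """
    ranked = sorted(detections, key=lambda d: d[0], reverse=True)
    tp = fp = 0
    ap = prev_recall = 0.0
    for _, hit in ranked:
        if hit:
            tp += 1
        else:
            fp += 1
        recall = tp / n_truth
        precision = tp / (tp + fp)
        ap += (recall - prev_recall) * precision  # rectangle under the PR curve
        prev_recall = recall
    return ap

# Four detections against four ground-truth canals, with one false positive.
ap = average_precision([(0.9, True), (0.8, True), (0.7, False), (0.6, True)], n_truth=4)
print(ap)  # 0.6875
```

Whether a detection counts as a true positive is decided by the IoU threshold named in the abstract (here 0.5).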

7.
BMC Oral Health ; 23(1): 764, 2023 10 17.
Article in English | MEDLINE | ID: mdl-37848870

ABSTRACT

BACKGROUND: Panoramic radiographs, in which anatomic landmarks can be observed, are used to detect cases closely related to pediatric dentistry. The purpose of this study was to investigate the success and reliability of artificial intelligence in detecting maxillary and mandibular anatomic structures observed on panoramic radiographs in children. METHODS: A total of 981 mixed images of pediatric patients were labelled for 9 different pediatric anatomic landmarks, including the maxillary sinus, orbita, mandibular canal, mental foramen, foramen mandible, incisura mandible, articular eminence, and condylar and coronoid processes; training was carried out using 2D convolutional neural network (CNN) architectures for 500 epochs, and PyTorch-implemented YOLOv5 models were produced. The success of the AI model's predictions was tested on a 10% test dataset. RESULTS: A total of 14,804 labels were made, including maxillary sinus (1922), orbita (1944), mandibular canal (1879), mental foramen (884), foramen mandible (1885), incisura mandible (1922), articular eminence (1645), and condylar (1733) and coronoid (990) processes. The most successful F1 scores were obtained for orbita (1), incisura mandible (0.99), maxillary sinus (0.98), and mandibular canal (0.97). The best sensitivity values were obtained for orbita, maxillary sinus, mandibular canal, incisura mandible, and condylar process; the worst for mental foramen (0.92) and articular eminence (0.92). CONCLUSIONS: Regular and standardized labelling, the relatively larger areas, and the success of the YOLOv5 algorithm contributed to these successful results. Automatic segmentation of these structures will save physicians time in clinical diagnosis and will increase the visibility of related pathologies and the awareness of physicians.


Subjects
Anatomic Landmarks; Artificial Intelligence; Humans; Child; Radiography, Panoramic/methods; Anatomic Landmarks/diagnostic imaging; Reproducibility of Results; Mandible/diagnostic imaging
8.
J Oral Implantol ; 49(4): 344-345, 2023 08 01.
Article in English | MEDLINE | ID: mdl-37527149
9.
Tuberk Toraks ; 71(2): 131-137, 2023 Jun.
Article in Turkish | MEDLINE | ID: mdl-37345395

ABSTRACT

Introduction: Pulmonary embolism is a type of thromboembolism seen in the main pulmonary artery and its branches. This study aimed to diagnose acute pulmonary embolism using a deep learning method on computed tomographic pulmonary angiography (CTPA) and to perform segmentation of the pulmonary embolism data. Materials and Methods: The CTPA images of patients diagnosed with pulmonary embolism who underwent scheduled imaging were retrospectively evaluated. After data collection, the areas diagnosed as embolisms in the axial section images were segmented. The dataset was divided into three parts: training, validation, and testing. The results were calculated by selecting 50% as the cut-off value for the intersection over union. Result: Images were obtained from 1,550 patients. The mean age of the patients was 64.23 ± 15.45 years. A total of 2,339 axial computed tomography images from these patients were used. A PyTorch U-Net was trained for 400 epochs, and the best model, from epoch 178, was recorded. In the testing group, there were 471 true positives and 35 false positives, and 27 cases were not detected. The sensitivity of CTPA segmentation was 0.95, the precision was 0.93, and the F1 score was 0.94. The area under the curve obtained in the receiver operating characteristic analysis was 0.88. Conclusions: In this study, the deep learning method was successfully employed for the segmentation of acute pulmonary embolism in CTPA, yielding positive outcomes.
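The 50% IoU cut-off used above decides whether a predicted region counts as a true positive. A hedged sketch for axis-aligned boxes (the study segmented free-form regions; boxes are used here only for brevity, and the coordinates are illustrative):

```python
def box_iou(a, b):
    """IoU of two axis-aligned boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

def is_true_positive(pred, truth, threshold=0.5):
    """Apply the study's 50% IoU cut-off to one prediction/ground-truth pair."""
    return box_iou(pred, truth) >= threshold

# A prediction shifted by a quarter of the box width: IoU = 0.6, so it counts as a TP.
print(box_iou((0, 0, 100, 100), (25, 0, 125, 100)))  # 0.6
```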


Subjects
Deep Learning; Pulmonary Embolism; Humans; Middle Aged; Aged; Retrospective Studies; Pulmonary Embolism/diagnostic imaging; Tomography, X-Ray Computed; Acute Disease; Angiography/methods
10.
Quintessence Int ; 54(8): 680-693, 2023 Sep 19.
Article in English | MEDLINE | ID: mdl-37313576

ABSTRACT

OBJECTIVES: This study aimed to develop an artificial intelligence (AI) model that can automatically determine tooth numbering, frenulum attachments, gingival overgrowth areas, and gingival inflammation signs on intraoral photographs, and to evaluate the performance of this model. METHOD AND MATERIALS: A total of 654 intraoral photographs were used (n = 654). All photographs were reviewed by three periodontists, and all teeth, frenulum attachments, gingival overgrowth areas, and gingival inflammation signs were labeled using the segmentation method in web-based labeling software. Tooth numbering was carried out according to the FDI system. An AI model was developed using the YOLOv5x architecture with labels of 16,795 teeth, 2,493 frenulum attachments, 1,211 gingival overgrowth areas, and 2,956 gingival inflammation signs. The confusion matrix system and ROC (receiver operating characteristic) analysis were used to statistically evaluate the success of the developed model. RESULTS: The sensitivity, precision, F1 score, and AUC (area under the curve) were 0.990, 0.784, 0.875, and 0.989 for tooth numbering; 0.894, 0.775, 0.830, and 0.827 for frenulum attachments; 0.757, 0.675, 0.714, and 0.774 for gingival overgrowth areas; and 0.737, 0.823, 0.777, and 0.802 for gingival inflammation signs, respectively. CONCLUSION: The results of the present study show that AI systems can be successfully used to interpret intraoral photographs. These systems have the potential to accelerate the digital transformation of clinical and academic dentistry through the automatic determination of anatomical structures and dental conditions from intraoral photographs.
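The FDI system mentioned above numbers permanent teeth with a two-digit code: the first digit is the quadrant (1 = upper right, 2 = upper left, 3 = lower left, 4 = lower right) and the second is the tooth's position counted from the midline (1-8). A minimal sketch (the helper name is illustrative):

```python
QUADRANTS = {1: "upper right", 2: "upper left", 3: "lower left", 4: "lower right"}

def fdi_code(quadrant, position):
    """Two-digit FDI notation for a permanent tooth: quadrant 1-4, position 1-8."""
    if quadrant not in QUADRANTS or not 1 <= position <= 8:
        raise ValueError("quadrant must be 1-4 and position 1-8")
    return quadrant * 10 + position

print(fdi_code(1, 6))  # 16: upper right first molar
print(fdi_code(3, 1))  # 31: lower left central incisor
```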


Subjects
Gingival Overgrowth; Gingivitis; Tooth; Humans; Retrospective Studies; Artificial Intelligence; Gingivitis/diagnosis; Neural Networks, Computer; Algorithms; Inflammation
11.
J Oral Rehabil ; 50(9): 758-766, 2023 Sep.
Article in English | MEDLINE | ID: mdl-37186400

ABSTRACT

BACKGROUND: The use of artificial intelligence has many advantages, especially in the field of oral and maxillofacial radiology. Early diagnosis of temporomandibular joint osteoarthritis by artificial intelligence may improve prognosis. OBJECTIVE: The aim of this study was to perform classification of temporomandibular joint (TMJ) osteoarthritis and TMJ segmentation on cone beam computed tomography (CBCT) sagittal images with artificial intelligence. METHODS: The success of the YOLOv5 architecture, an artificial intelligence model, in TMJ segmentation and osteoarthritis classification was evaluated on 2000 sagittal sections (500 healthy, 500 erosion, 500 osteophyte, and 500 flattening images) obtained from the CBCT DICOM images of 290 patients. RESULTS: The sensitivity, precision, and F1 scores of the model for TMJ osteoarthritis classification were 1, 0.7678, and 0.8686, respectively, with a classification accuracy of 0.7678. The prediction values of the classification model were 88% for healthy joints, 70% for flattened joints, 95% for joints with erosion, and 86% for joints with osteophytes. The sensitivity, precision, and F1 score of the YOLOv5 model for TMJ segmentation were 1, 0.9953, and 0.9976, respectively; the AUC for TMJ segmentation was 0.9723, and the accuracy was 0.9953. CONCLUSION: With its successful results in TMJ segmentation and osteoarthritis classification, the artificial intelligence model applied in this study can be a support method that saves time and offers convenience for physicians in diagnosing the disease.


Subjects
Osteoarthritis; Temporomandibular Joint Disorders; Humans; Temporomandibular Joint Disorders/diagnostic imaging; Artificial Intelligence; Temporomandibular Joint/diagnostic imaging; Cone-Beam Computed Tomography/methods; Osteoarthritis/diagnostic imaging
12.
Diagnostics (Basel) ; 13(10)2023 May 19.
Article in English | MEDLINE | ID: mdl-37238284

ABSTRACT

The assessment of alveolar bone loss, a crucial element of the periodontium, plays a vital role in the diagnosis of periodontitis and the prognosis of the disease. In dentistry, artificial intelligence (AI) applications have demonstrated practical and efficient diagnostic capabilities, leveraging machine learning and cognitive problem-solving functions that mimic human abilities. This study aims to evaluate the effectiveness of AI models in identifying alveolar bone loss as present or absent across different regions. To achieve this goal, alveolar bone loss models were generated using the PyTorch-based YOLOv5 model implemented via CranioCatch software, detecting periodontal bone loss areas and labeling them using the segmentation method on 685 panoramic radiographs. Besides the general evaluation, models were grouped according to subregions (incisors, canines, premolars, and molars) to provide a targeted evaluation. Our findings reveal that the lowest sensitivity and F1 score values were associated with total alveolar bone loss, while the highest values were observed in the maxillary incisor region. This shows that artificial intelligence has high potential in analytical studies evaluating periodontal bone loss. Considering the limited amount of data, it is predicted that this success will increase with training on a more comprehensive dataset in further studies.

13.
Diagnostics (Basel) ; 13(4)2023 Feb 04.
Article in English | MEDLINE | ID: mdl-36832069

ABSTRACT

This study aims to develop an algorithm for the automatic segmentation of the parotid gland on CT images of the head and neck using the U-Net architecture and to evaluate the model's performance. In this retrospective study, a total of 30 anonymized CT volumes of the head and neck were sliced into 931 axial images of the parotid glands. Ground truth labeling was performed with the CranioCatch Annotation Tool (CranioCatch, Eskisehir, Turkey) by two oral and maxillofacial radiologists. The images were resized to 512 × 512 and split into training (80%), validation (10%), and testing (10%) subgroups. A deep convolutional neural network model was developed using the U-Net architecture. Automatic segmentation performance was evaluated in terms of the F1-score, precision, sensitivity, and area under the curve (AUC) statistics. The threshold for a successful segmentation was an intersection of over 50% of the pixels with the ground truth. The F1-score, precision, and sensitivity of the AI model in segmenting the parotid glands in the axial CT slices were all 1, and the AUC was 0.96. This study has shown that AI models based on deep learning can automatically segment the parotid gland on axial CT images.

14.
Oral Radiol ; 39(1): 207-214, 2023 01.
Article in English | MEDLINE | ID: mdl-35612677

ABSTRACT

OBJECTIVES: Artificial intelligence (AI) techniques such as convolutional neural networks (CNNs) are a promising breakthrough that can help clinicians analyze medical imaging, diagnose taurodontism, and make therapeutic decisions. The purpose of this study was to develop and evaluate a CNN-based AI model to diagnose teeth with taurodontism in panoramic radiography. METHODS: 434 anonymized, mixed-sized panoramic radiographs of patients over the age of 13 years were used to develop automatic taurodont tooth segmentation models using a PyTorch-implemented U-Net model. Datasets were split into training, validation, and test groups of both normal and masked images. Data augmentation was applied to the training and validation groups with vertically flipped, horizontally flipped, and both-ways flipped images. A confusion matrix was used to determine model performance. RESULTS: Among the 43 test group images with 126 labels, there were 109 true positives, 29 false positives, and 17 false negatives. The sensitivity, precision, and F1-score of taurodont tooth segmentation were 0.8650, 0.7898, and 0.8257, respectively. CONCLUSIONS: The CNN's identification of taurodontism produced results almost identical to the labeled training data, and the system achieved close to expert-level performance in detecting taurodontism of teeth.
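The flip augmentation described above yields up to four variants per image (original, horizontal flip, vertical flip, both). A minimal sketch on images represented as 2-D lists (in practice PIL or torchvision transforms would operate on image tensors; the tiny 2 × 2 image is illustrative):

```python
def hflip(img):
    """Mirror a 2-D image (a list of rows) left-right."""
    return [row[::-1] for row in img]

def vflip(img):
    """Mirror a 2-D image top-bottom."""
    return img[::-1]

def augment(img):
    """Return the original image plus its three flipped variants."""
    return [img, hflip(img), vflip(img), vflip(hflip(img))]

img = [[1, 2],
       [3, 4]]
for variant in augment(img):
    print(variant)
```

The same flips must be applied to the ground-truth masks so that labels stay aligned with the augmented images.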


Subjects
Artificial Intelligence; Deep Learning; Radiography, Panoramic; Neural Networks, Computer; Algorithms
15.
J Stomatol Oral Maxillofac Surg ; 124(1): 101264, 2023 02.
Article in English | MEDLINE | ID: mdl-35964938

ABSTRACT

INTRODUCTION: Deep learning methods have recently been applied to the processing of medical images and have shown promise in a variety of applications. This study aimed to develop a deep learning approach for identifying oral lichen planus lesions using photographic images. MATERIAL AND METHODS: Anonymous retrospective photographic images of buccal mucosa, 65 healthy and 72 with oral lichen planus lesions, were identified using the CranioCatch program (CranioCatch, Eskisehir, Turkey). All images were re-checked and verified by oral medicine and maxillofacial radiology experts. The dataset was divided into training (n = 51; n = 58), validation (n = 7; n = 7), and test (n = 7; n = 7) sets for healthy mucosa and mucosa with oral lichen planus lesions, respectively. An artificial intelligence model was developed using the Google Inception V3 architecture implemented with TensorFlow, a deep learning approach. RESULTS: The AI deep learning model classified all test images of both healthy and diseased mucosa with a 100% success rate. CONCLUSION: In the healthcare business, AI offers a wide range of uses and applications. Increased workload, increased job complexity, and probable doctor fatigue may jeopardize diagnostic ability and results. Artificial intelligence components in imaging equipment would lessen this effort and increase efficiency. They can also detect oral lesions and have access to more data than their human counterparts. Our preliminary findings show that deep learning has the potential to handle this significant challenge.


Subjects
Deep Learning; Lichen Planus, Oral; Humans; Lichen Planus, Oral/diagnosis; Lichen Planus, Oral/pathology; Retrospective Studies; Artificial Intelligence; Algorithms
16.
J Ultrason ; 22(91): e204-e208, 2022 Oct.
Article in English | MEDLINE | ID: mdl-36483782

ABSTRACT

Aim: Deep learning algorithms have lately been used for medical image processing and have shown promise in a range of applications. The purpose of this study was to develop and test computer-based diagnostic tools for evaluating masseter muscle segmentation on ultrasonography images. Materials and methods: A total of 388 anonymous retrospective ultrasonographic images of the adult masseter muscle were evaluated. The masseter muscle was labeled on ultrasonography images using the polygonal labeling method with the CranioCatch labeling program (CranioCatch, Eskisehir, Turkey). All images were re-checked and verified by oral and maxillofacial radiology experts. The dataset was divided into training (n = 312), validation (n = 38), and test (n = 38) sets. An artificial intelligence model was developed using the PyTorch U-Net architecture, a deep learning approach. Results: The U-Net deep learning model detected and segmented all test images, and when the success rate of the estimation was evaluated, the F1, sensitivity, and precision results of the model were 1.0, 1.0, and 1.0, respectively. Conclusion: Artificial intelligence shows promise in automatic segmentation of the masseter muscle on ultrasonography images. This strategy can aid surgeons, radiologists, and other medical practitioners in reducing diagnostic time.

17.
Diagnostics (Basel) ; 12(12)2022 Dec 07.
Article in English | MEDLINE | ID: mdl-36553088

ABSTRACT

As the large number of archived digital images makes it easy for radiology to provide data for artificial intelligence (AI) evaluation, AI algorithms are increasingly applied to disease detection. The aim of this study was to perform a diagnostic evaluation on periapical radiographs with an AI model based on convolutional neural networks (CNNs). The dataset includes 1169 adult periapical radiographs, which were labelled in CranioCatch annotation software. Deep learning was performed using a U-Net model implemented with the PyTorch library. The deep learning-based AI models improved the success rate of segmenting carious lesions, crowns, dental pulp, dental fillings, periapical lesions, and root canal fillings in periapical images. The sensitivity, precision, and F1 scores were 0.82, 0.82, and 0.82 for carious lesions; 1, 1, and 1 for crowns; 0.97, 0.87, and 0.92 for dental pulp; 0.95, 0.95, and 0.95 for fillings; 0.92, 0.85, and 0.88 for periapical lesions; and 1, 0.96, and 0.98 for root canal fillings, respectively. The success of AI algorithms in evaluating periapical radiographs is encouraging and promising for their use in routine clinical processes as a clinical decision support system.

18.
Pol J Radiol ; 87: e516-e520, 2022.
Article in English | MEDLINE | ID: mdl-36250137

ABSTRACT

Purpose: Magnetic resonance imaging (MRI) has a special place in the evaluation of orbital and periorbital lesions. Segmentation is one of the deep learning methods. In this study, we aimed to perform segmentation of orbital and periorbital lesions. Material and methods: Contrast-enhanced orbital MRIs performed between 2010 and 2019 were retrospectively screened, and 302 cross-sections of contrast-enhanced, fat-suppressed, T1-weighted, axial MRI images of 95 patients obtained using 3 T and 1.5 T devices were included in the study. The dataset was divided into three parts: training, test, and validation. The number of training and validation images was quadrupled by applying data augmentation (horizontal, vertical, and both flips). A PyTorch U-Net was trained for 100 epochs. The intersection over union (IoU) statistic (the Jaccard index) threshold was set at 50%, and the results were calculated. Results: The model from the 77th epoch provided the best results: 23 true positives, 4 false positives, and 8 false negatives. The precision, sensitivity, and F1 score were 0.85, 0.74, and 0.79, respectively. Conclusions: Our study proved successful in segmentation by the deep learning method. It is one of the pioneering studies on this subject and will shed light on further segmentation studies on orbital MR images.

19.
Med Princ Pract ; 31(6): 555-561, 2022.
Article in English | MEDLINE | ID: mdl-36167054

ABSTRACT

OBJECTIVE: The purpose of this study was to create an artificial intelligence (AI) system for detecting idiopathic osteosclerosis (IO) on panoramic radiographs for automatic, routine, and simple evaluation. SUBJECT AND METHODS: A deep learning method was carried out with panoramic radiographs obtained from healthy patients. A total of 493 anonymized panoramic radiographs were used to develop the AI system (CranioCatch, Eskisehir, Turkey) for the detection of IOs. The panoramic radiographs were acquired from the radiology archives of the Department of Oral and Maxillofacial Radiology, Faculty of Dentistry, Eskisehir Osmangazi University. A GoogLeNet Inception v2 model implemented with the TensorFlow library was used for the detection of IOs, and a confusion matrix was used to evaluate model performance. RESULTS: Fifty IOs were detected accurately by the AI model in the 52 test images, which contained 57 IOs. The sensitivity, precision, and F-measure values were 0.88, 0.83, and 0.86, respectively. CONCLUSION: A deep learning-based AI algorithm has the potential to detect IOs accurately on panoramic radiographs. AI systems may reduce the workload of dentists in terms of diagnostic effort.


Subjects
Deep Learning; Osteosclerosis; Humans; Artificial Intelligence; Radiography, Panoramic; Algorithms; Osteosclerosis/diagnostic imaging
20.
Diagnostics (Basel) ; 12(9)2022 Sep 16.
Article in English | MEDLINE | ID: mdl-36140645

ABSTRACT

The present study aims to validate the diagnostic performance and evaluate the reliability of an artificial intelligence system based on the convolutional neural network method for the morphological classification of the sella turcica in CBCT (cone-beam computed tomography) images. In this retrospective study, sella segmentation and classification models (CranioCatch, Eskisehir, Türkiye) were applied to sagittal slices of CBCT images: segmentation used a PyTorch-supported U-Net, and classification used the GoogLeNet Inception V3 algorithm implemented with TensorFlow 1. The AI models achieved successful results for sella turcica segmentation of CBCT images based on the deep learning models. The sensitivity, precision, and F-measure values were 1.0, 1.0, and 1.0, respectively, for segmentation of the sella turcica in sagittal slices of CBCT images. The sensitivity, precision, accuracy, and F1-score were 1.0, 0.95, 0.98, and 0.84, respectively, for flattened sella turcica classification; 0.95, 0.83, 0.92, and 0.88 for oval classification; and 0.75, 0.94, 0.90, and 0.83 for round classification. It is predicted that detecting anatomical landmarks of orthodontic importance, such as the sella point, with artificial intelligence algorithms will save time for orthodontists and facilitate diagnosis.
